
    Mathematical modeling for fabricating a microstructure with a pre-specified geometry using laser-induced chemical vapor deposition

    Laser-induced chemical vapor deposition (LCVD) is an emerging technique with many practical applications. To optimize the system for fabricating a microstructure with a pre-specified geometry in pyrolytic LCVD, a three-dimensional mathematical model is developed for predicting temperature distributions and laser dwell times across the substrate scanned by the laser beam. A microstructure is fabricated layer by layer, and for each layer the laser beam moves from one pixel to the next. The complicated correlations among temperature distribution, deposit growth rate, and laser dwell time are investigated. A purely heterogeneous reaction is assumed and gas-phase transport is ignored. A finite difference scheme and an iterative numerical algorithm were developed for solving the model; the numerical computation is stable and convergent. The normal growth at each pixel is computed from the geometry of the deposit, and the temperature distribution is obtained when the laser beam is focused at different pixels. From the temperature and normal growth, the dwell time for every pixel of each deposit layer is predicted. The processes for fabricating a convex and a concave microlens with a pre-specified geometry in pyrolytic LCVD with a Gaussian laser beam were simulated. Nickel and graphite were selected as the deposit and substrate materials, respectively. Factors such as the intensity of the laser beam and the geometry of the microstructure are discussed. The temperature distributions when the laser beam is focused at different pixels on the surface of the deposit were obtained and analyzed for each layer. The dwell time distribution, which determines the laser scanning pattern, is predicted. The process for fabricating a microlens is quite different from that for a rod. The maximum temperature on the surface of the deposit decreases as the deposit thickness increases, indicating that once the temperature drops to a certain threshold, growth will stop unless the laser intensity is increased.
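    The heart of such a model is the heat equation with a moving Gaussian surface source. As a minimal illustration of the kind of explicit finite-difference update the abstract describes, the sketch below marches a 2D substrate temperature field a few steps under a Gaussian beam. All material and beam parameters are invented for illustration (the paper's model is 3D and couples temperature to deposit growth); periodic boundaries are used purely for brevity.

```python
import numpy as np

# Illustrative sketch, NOT the paper's model: explicit finite-difference
# time stepping of 2D heat conduction under a Gaussian laser source.
# Every numeric value below is an assumption chosen for stability.

n, dx, dt = 51, 1e-6, 1e-9           # grid points, spacing (m), time step (s)
alpha = 1e-5                          # thermal diffusivity (m^2/s), assumed
P, w = 0.5, 5e-6                      # beam power (W), 1/e^2 radius (m), assumed
rho_cp = 2.0e6                        # volumetric heat capacity (J/m^3/K), assumed

x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
# Gaussian intensity profile centred on the currently focused pixel
I = (2 * P / (np.pi * w**2)) * np.exp(-2 * (X**2 + Y**2) / w**2)

T = np.full((n, n), 300.0)            # initial substrate temperature (K)
for _ in range(200):                  # march a few hundred time steps
    # 5-point Laplacian; np.roll gives periodic boundaries for brevity
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    T = T + dt * (alpha * lap + I / rho_cp)

peak_T = T.max()                      # hottest point sits under the beam centre
```

The stability constraint of the explicit scheme (alpha·dt/dx² ≪ 1/4) is what forces the tiny time step; the paper's iterative algorithm would additionally re-solve this field each time the beam moves to a new pixel.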

    An adsorbed gas estimation model for shale gas reservoirs via statistical learning

    Shale gas plays an important role in reducing pollution and adjusting the structure of the world energy supply. Gas content estimation is particularly significant in shale gas resource evaluation. Various estimation methods exist, such as first-principle methods and empirical models. However, resource evaluation presents many challenges, especially the insufficient accuracy of existing models and the high cost resulting from time-consuming adsorption experiments. In this research, a low-cost and high-accuracy model based on geological parameters is constructed through statistical learning methods to estimate adsorbed shale gas content.
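    The abstract does not name the estimator, so as a minimal sketch of "statistical learning on geological parameters", the snippet below fits an ordinary least-squares model mapping three assumed geological features (TOC, formation pressure, formation temperature) to adsorbed gas content on synthetic data. The feature names, the linear form, and all numbers are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: feature names and the linear relationship are
# assumptions for illustration; the paper trains on measured parameters.
n = 200
toc = rng.uniform(1, 6, n)            # total organic carbon (%), assumed feature
pressure = rng.uniform(10, 40, n)     # formation pressure (MPa), assumed feature
temp = rng.uniform(60, 120, n)        # formation temperature (degC), assumed feature
gas = 0.8 * toc + 0.05 * pressure - 0.01 * temp + rng.normal(0, 0.1, n)

# Least-squares fit with an intercept column
X = np.column_stack([toc, pressure, temp, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, gas, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((gas - pred) ** 2) / np.sum((gas - gas.mean()) ** 2)
```

A model like this is "low-cost" in exactly the sense the abstract means: once fitted, it predicts gas content from routinely logged parameters instead of a slow adsorption experiment.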

    Place recognition: An Overview of Vision Perspective

    Place recognition is one of the most fundamental topics in the computer vision and robotics communities, where the task is to accurately and efficiently recognize the location of a given query image. Despite years of accumulated wisdom in this field, place recognition remains an open problem due to the various ways in which the appearance of real-world places may differ. This paper presents an overview of the place recognition literature. Since condition-invariant and viewpoint-invariant features are essential to a long-term robust visual place recognition system, we start with the traditional image description methodologies developed in the past, which exploit techniques from the image retrieval field. Recently, rapid advances in related fields such as object detection and image classification have inspired a new technique for improving visual place recognition systems, namely convolutional neural networks (CNNs). We then introduce recent progress in CNN-based visual place recognition systems that automatically learn better image representations for places. Finally, we close with a discussion and future directions for place recognition. Comment: Applied Sciences (2018)
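    Whatever descriptor is used, traditional or CNN-learned, the retrieval step reduces to nearest-neighbour search over descriptor vectors. The sketch below uses random vectors as stand-ins for CNN features and matches a noisy query (simulating appearance change) to a database of places by cosine similarity; the dimensions and noise level are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch of the retrieval core of visual place recognition:
# descriptors here are random stand-ins for real CNN features.

rng = np.random.default_rng(1)
db = rng.normal(size=(100, 256))                 # 100 places, assumed 256-d descriptors
db /= np.linalg.norm(db, axis=1, keepdims=True)  # L2-normalise rows

# Simulate a query taken at database place 42: a noisy copy of its
# descriptor (standing in for condition/viewpoint change).
query = db[42] + 0.1 * rng.normal(size=256)
query /= np.linalg.norm(query)

scores = db @ query                  # cosine similarity against every place
best = int(np.argmax(scores))        # index of the retrieved place
```

A robust descriptor is precisely one for which this nearest-neighbour match survives large appearance changes, i.e. the noise term can grow without the argmax moving.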

    Editorial: Deep Learning for Toxicity and Disease Prediction


    Influence of Al and Al_2O_3 Nanoparticles on the Thermal Decay of 1,3,5-Trinitro-1,3,5-triazinane (RDX): Reactive Molecular Dynamics Simulations

    Metallic additives, Al nanoparticles in particular, have been used extensively in energetic materials (EMs), of which thermal decomposition is one of the most basic properties. Nevertheless, the underlying mechanism by which the highly active Al nanoparticles and their oxidized counterparts, Al_2O_3 nanoparticles, influence the thermal decay of aluminized EMs has not been fully understood. Herein, we explore the influence of Al and Al_2O_3 nanoparticles on the thermal decomposition of 1,3,5-trinitro-1,3,5-triazinane (RDX), one of the most common EMs, based on large-scale reactive force field molecular dynamics simulations under three heating schemes (constant-temperature, programmed, and adiabatic heating). The presence of Al nanoparticles significantly reduces the induction time and the energy required to activate RDX decay and greatly increases the energy release. The fundamental reason is that Al changes the primary decay pathway from the unimolecular N–NO_2 scission of RDX to bimolecular barrier-free or low-barrier Al-involved reactions, as Al possesses a strong O-extraction capability and a moderate capability to react with C/H/N; it is also responsible for the growth of Al-containing clusters. Al_2O_3 nanoparticles demonstrate a similar catalytic capability but contribute less to the enhancement of energy release. Moreover, the detailed evolutions of key thermodynamic properties, intermediate and final gaseous products, and Al-containing products are presented. Under the programmed and adiabatic heating conditions, the catalysis by the Al and Al_2O_3 nanoparticles becomes more distinct. Therefore, many properties of aluminized EMs are expected to be well understood in light of our simulation results.
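    The link between a lower activation barrier and a shorter induction time can be illustrated with toy first-order Arrhenius kinetics integrated under a linear temperature ramp, mimicking the programmed-heating scheme. The barriers, pre-exponential factor, and heating rate below are illustrative assumptions, not ReaxFF results.

```python
import math

# Toy first-order Arrhenius decay under programmed (linear) heating,
# illustrating why a lower barrier (as with Al-involved reactions)
# shortens the induction period. All parameters are assumed.

R = 8.314                  # gas constant (J/mol/K)
A = 1e13                   # pre-exponential factor (1/s), assumed
T0 = 300.0                 # starting temperature (K)
HEAT_RATE = 1e11           # heating rate (K/s), MD-scale, assumed
DT = 1e-12                 # integration time step (s)

def induction_time(ea):
    """Time until 5% of the material has decomposed (first-order decay)."""
    x, t = 1.0, 0.0        # remaining fraction, elapsed time
    while x > 0.95:
        temp = T0 + HEAT_RATE * t
        k = A * math.exp(-ea / (R * temp))
        x -= k * x * DT    # explicit Euler step of dx/dt = -k * x
        t += DT
    return t

t_neat = induction_time(ea=200e3)   # N-NO2 homolysis-like barrier, assumed
t_al = induction_time(ea=120e3)     # lower Al-assisted barrier, assumed
```

Because the rate constant depends exponentially on Ea/T, the lower-barrier channel crosses the decomposition threshold at a much lower temperature, hence earlier on the ramp.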

    Identification of new members of hydrophobin family using primary structure analysis

    BACKGROUND: Hydrophobins are fungal proteins that can self-assemble into amphipathic membranes at hydrophilic/hydrophobic interfaces. The assemblages formed by Class I hydrophobins are extremely stable and possess the remarkable ability to change the polarity of a surface. One of their most important industrial applications is their use in paints. Without detailed knowledge of the 3D structure and self-assembly principles of hydrophobins, it is difficult to make significant progress in this research. RESULTS: In order to provide useful information to hydrophobin researchers, we analyzed the primary structure of hydrophobins to gain more insight into these proteins. In this paper, we present an in-depth primary sequence analysis using batch BLAST searches of the database, sequence filtering by programming, and motif finding by MEME. We used batch BLAST to find similar sequences in the NCBI nr database, and then used MEME to identify motifs. Based on the newly found motifs and the well-known C-CC-C-C-CC-C pattern, we used MAST to search the entire nr database. Finally, domain searches and phylogenetic analysis were conducted to confirm the results. After searching the nr database with the new PSSM-format motifs identified by MEME, many sequences from various species were found by MAST; filtering by pattern, domain, and length left 9 qualified candidates. CONCLUSION: All 9 newly identified potential hydrophobins possess the common pattern and the hydrophobin domain. From the multiple sequence alignment, some of them group very closely with known hydrophobins, meaning their phylogenetic relationship is very close and it is highly plausible that they are indeed hydrophobin proteins.
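    The C-CC-C-C-CC-C pattern mentioned above, eight cysteines with the second/third and sixth/seventh adjacent, can be screened for with a simple regular expression. The inter-cysteine spacing bounds below are loose illustrative guesses, not the exact length filters used in the paper, and the toy sequences are invented.

```python
import re

# Sketch of filtering candidates by the hydrophobin C-CC-C-C-CC-C
# cysteine pattern. Spacer length bounds {m,n} are assumptions.

HYDROPHOBIN_PATTERN = re.compile(
    r"C.{3,40}"      # C1, then spacer to the C2C3 doublet
    r"CC.{5,40}"     # C2C3 doublet, spacer to C4
    r"C.{1,40}"      # C4, spacer to C5
    r"C.{3,40}"      # C5, spacer to the C6C7 doublet
    r"CC.{3,40}"     # C6C7 doublet, spacer to C8
    r"C"             # C8
)

def looks_like_hydrophobin(seq):
    """True if the sequence contains the eight-cysteine spacing pattern."""
    return HYDROPHOBIN_PATTERN.search(seq) is not None

# Toy sequences: one with the right cysteine arrangement, one without.
candidate = ("MKFAC" + "AAAA" + "CC" + "AAAAAAA" + "C" + "AAA" +
             "C" + "AAAAA" + "CC" + "AAAA" + "C" + "GGS")
negative = "MKFACAAAACAAAAC"
```

In the actual pipeline this coarse pattern filter was combined with MEME/MAST motif scores and domain checks; a regex alone would admit many false positives.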

    Inferring Single-Cell 3D Chromosomal Structures Based On the Lennard-Jones Potential

    Reconstructing three-dimensional (3D) chromosomal structures based on single-cell Hi-C data is a challenging scientific problem due to the extreme sparseness of the single-cell Hi-C data. In this research, we used the Lennard-Jones potential to reconstruct both 500 kb and high-resolution 50 kb chromosomal structures from single-cell Hi-C data. A chromosome was represented by a string of 500 kb or 50 kb DNA beads and placed into a 3D cubic lattice for simulations. A 2D Gaussian function was used to impute the sparse single-cell Hi-C contact matrices. We designed a novel loss function based on the Lennard-Jones potential, in which the ε value, i.e., the well depth, indicates how stable the binding of each pair of beads is. For bead pairs that have single-cell Hi-C contacts, and for their neighboring bead pairs, the loss function assigns stronger binding stability. The Metropolis–Hastings algorithm was used to try different locations for the DNA beads, and simulated annealing was used to optimize the loss function. We demonstrated the correctness and validity of the reconstructed 3D structures by evaluating the models according to multiple criteria and comparing them with 3D-FISH data.
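    The two core ingredients named in the abstract, a Lennard-Jones term whose well depth ε encodes pair binding stability, and a Metropolis acceptance test for proposed bead moves, can be sketched compactly. The specific ε values and temperature below are illustrative assumptions, not the paper's settings.

```python
import math
import random

# Sketch of the two building blocks: a 12-6 Lennard-Jones pair term and a
# Metropolis-Hastings acceptance test. Parameter values are assumed.

def lj_energy(r, epsilon, sigma=1.0):
    """12-6 Lennard-Jones potential; minimum of depth -epsilon at r = 2^(1/6)*sigma."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def metropolis_accept(delta_e, temperature, rng):
    """Accept a proposed bead move that changes the loss by delta_e."""
    if delta_e <= 0:
        return True                      # downhill moves always accepted
    return rng.random() < math.exp(-delta_e / temperature)

rng = random.Random(0)
r_min = 2 ** (1 / 6)                     # distance minimising the potential
e_contact = lj_energy(r_min, epsilon=2.0)  # deep well: pair with a Hi-C contact
e_far = lj_energy(3.0, epsilon=2.0)        # weak interaction at large separation
```

Pairs supported by Hi-C contacts get a larger ε, so configurations that keep them near r_min score a much lower loss; simulated annealing then gradually lowers the temperature so late-stage moves rarely escape those wells.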

    A Comparative Study of K-Spectrum-Based Error Correction Methods for Next-Generation Sequencing Data Analysis

    Background: Innumerable opportunities for new genomic research have been stimulated by advances in high-throughput next-generation sequencing (NGS). However, the pitfall of NGS data abundance is the complication of distinguishing true biological variants from sequencing errors during downstream analysis. Many error correction methods have been developed to correct erroneous NGS reads before further analysis, but independent evaluation of the impact of dataset features such as read length, genome size, and coverage depth on their performance is lacking. This comparative study aims to investigate the strengths, weaknesses, and limitations of some of the newest k-spectrum-based methods and to provide recommendations for users in selecting suitable methods for specific NGS datasets. Methods: Six k-spectrum-based methods, i.e., Reptile, Musket, Bless, Bloocoo, Lighter, and Trowel, were compared using six simulated sets of paired-end Illumina sequencing data. These NGS datasets varied in coverage depth (10× to 120×), read length (36 to 100 bp), and genome size (4.6 to 143 MB). The Error Correction Evaluation Toolkit (ECET) was employed to derive a suite of metrics (i.e., true positives, false positives, false negatives, recall, precision, gain, and F-score) for assessing the correction quality of each method. Results: Results from the computational experiments indicate that Musket had the best overall performance across the spectrum of variants reflected in the six datasets. The lowest accuracy of Musket (F-score = 0.81) occurred on a dataset with a medium read length (56 bp), medium coverage (50×), and a small genome (5.4 MB). The other five methods underperformed (F-score < 0.80) and/or failed to process one or more datasets. Conclusions: This study demonstrates that factors such as coverage depth, read length, and genome size may influence the performance of individual k-spectrum-based error correction methods. Thus, care must be taken in choosing appropriate methods for error correction of specific NGS datasets. Based on our comparative study, we recommend Musket as the top choice because of its consistently superior performance across all six testing datasets. Further extensive studies are warranted to assess these methods using experimental datasets generated by other NGS platforms (e.g., 454, SOLiD, and Ion Torrent) under more diversified parameter settings (k-mer values and edit distances), and to compare them against non-k-spectrum-based classes of error correction methods.
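    The evaluation metrics listed above follow directly from the TP/FP/FN counts. As a minimal sketch (with made-up counts, not figures from the study), the standard definitions used by ECET-style evaluations are:

```python
# Correction-quality metrics from true positives (errors correctly fixed),
# false positives (new errors introduced), and false negatives (errors
# left uncorrected). The example counts below are invented.

def correction_metrics(tp, fp, fn):
    recall = tp / (tp + fn)                # fraction of real errors fixed
    precision = tp / (tp + fp)             # fraction of corrections that were right
    gain = (tp - fp) / (tp + fn)           # net fraction of errors removed
    f_score = 2 * precision * recall / (precision + recall)
    return recall, precision, gain, f_score

recall, precision, gain, f_score = correction_metrics(tp=810, fp=90, fn=190)
```

Note that gain can be negative if a tool introduces more errors than it fixes, which is why it is reported alongside the F-score.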

    Assessing Relevance of Tweets for Risk Communication

    Although Twitter is used for emergency management activities, the relevance of tweets during a hazard event is still open to debate. In this study, six different computational (i.e., Natural Language Processing) and spatiotemporal analytical approaches were implemented to assess the relevance of risk information extracted from tweets obtained during the 2013 Colorado flood event. Primarily, tweets containing information about the flooding events and their impacts were analysed. Examination of the relationships of tweet volume and content with precipitation amount, damage extent, and official reports revealed that relevant tweets provided information about the event and its impacts rather than the other risk information that the public expects to receive via alert messages. However, only 14% of the geo-tagged tweets and only 0.06% of the total firehose tweets were found to be relevant to the event. By providing insight into the quality of social media data and its usefulness to emergency management activities, this study contributes to the literature on the quality of big data. Future research in this area should focus on assessing the reliability of relevant tweets for disaster-related situational awareness.
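    As a minimal sketch of the simplest relevance-assessment step, the snippet below flags tweets as event-relevant by keyword matching and computes the relevant share. The keyword list and tweets are invented; the study combines several NLP and spatiotemporal analyses, not plain keyword matching.

```python
# Toy keyword-based relevance filter; vocabulary and tweets are invented.

FLOOD_KEYWORDS = {"flood", "flooding", "rain", "evacuation", "damage", "road closed"}

def is_relevant(tweet):
    """True if the tweet mentions any flood/impact keyword."""
    text = tweet.lower()
    return any(kw in text for kw in FLOOD_KEYWORDS)

tweets = [
    "Major flooding on Boulder Creek, roads impassable",
    "Great coffee this morning!",
    "Evacuation order issued for low-lying areas",
]
relevant = [t for t in tweets if is_relevant(t)]
share = len(relevant) / len(tweets)     # fraction of relevant tweets
```

The study's headline numbers (14% of geo-tagged, 0.06% of firehose tweets relevant) are exactly this kind of share, computed after far more careful relevance judgments than a keyword filter.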